74 research outputs found

    A New Rational Algorithm for View Updating in Relational Databases

    The dynamics of belief and knowledge is a major component of any autonomous system that must be able to incorporate new pieces of information. To apply the rationality results of belief dynamics theory to practical problems, the theory needs to be generalized in two respects: first, it should allow a certain part of the beliefs to be declared immutable; and second, the belief state need not be deductively closed. Such a generalization of belief dynamics, referred to as base dynamics, is presented in this paper, together with the concept of a generalized revision algorithm for knowledge bases (Horn, or Horn logic with stratified negation). We show that knowledge base dynamics has an interesting connection with kernel change via hitting sets and abduction. We also show how techniques from disjunctive logic programming can be used for efficient (deductive) database updates. The key idea is to transform the given database together with the update request into a disjunctive (Datalog) logic program and to apply disjunctive techniques (such as minimal model reasoning) to solve the original update problem. The approach extends and integrates standard techniques for efficient query answering and integrity checking. The generation of a hitting set is carried out through a hyper tableau calculus combined with a magic-set transformation focused on minimality. Comment: arXiv admin note: substantial text overlap with arXiv:1301.515
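    The kernel-change connection mentioned above can be illustrated with a small sketch (names and brute-force strategy are mine, not the paper's algorithm): given the kernels of a violated constraint, i.e. the minimal sets of facts jointly responsible for the violation, the candidate deletions are exactly the minimal hitting sets of that family.

    ```python
    from itertools import chain, combinations

    def minimal_hitting_sets(kernels):
        """Enumerate minimal hitting sets of a family of sets by brute force.

        A hitting set intersects every kernel; the minimal ones correspond to
        candidate deletions in kernel-change style belief revision. Exponential
        in the universe size, so only suitable as an illustration.
        """
        universe = sorted(set(chain.from_iterable(kernels)))
        hits = []
        for r in range(1, len(universe) + 1):
            for cand in combinations(universe, r):
                s = set(cand)
                # Keep s only if it hits every kernel and no smaller hit is inside it.
                if all(s & k for k in kernels) and not any(h <= s for h in hits):
                    hits.append(s)
        return hits

    # Two kernels, each a minimal support of a violated integrity constraint.
    kernels = [{"p", "q"}, {"q", "r"}]
    print(minimal_hitting_sets(kernels))  # [{'q'}, {'p', 'r'}]
    ```

    Deleting `q` alone repairs both kernels; deleting `p` and `r` is the only other minimal repair.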

    Towards a Cloud-Based Service for Maintaining and Analyzing Data About Scientific Events

    We propose the new cloud-based service OpenResearch for managing and analyzing data about scientific events such as conferences and workshops in a persistent and reliable way. This includes data about scientific articles, participants, acceptance rates, submission numbers, and impact values, as well as organizational details such as program committees, chairs, fees, and sponsors. OpenResearch is a centralized repository for scientific events and supports researchers in collecting, organizing, sharing, and disseminating information about scientific events in a structured way. An additional feature currently under development is the possibility to archive web pages along with the extracted semantic data, in order to lift the burden of maintaining new and old conference web sites from public research institutions. However, the main advantage is that this cloud-based repository enables a comprehensive analysis of conference data. Based on extracted semantic data, it is possible to determine quality estimations, scientific communities, and research trends, as well as the development of acceptance rates, fees, and numbers of participants in a continuous way, complemented by projections into the future. Furthermore, data about research articles can be systematically explored using content-based analysis as well as citation linkage. All data maintained in this crowd-sourcing platform is made freely available through an open SPARQL endpoint, which allows for analytical queries in a flexible and user-defined way. Comment: A completed version of this paper has been accepted at the SAVE-SD workshop 2017 at the WWW conference

    CoDEL - A Relationally Complete Language for Database Evolution

    Software developers adapt to the fast-moving nature of software systems with agile development techniques. However, database developers lack the tools and concepts to keep pace. Data already existing in a running product needs to be evolved accordingly, usually by manually written SQL scripts. A promising approach in database research is to use a declarative database evolution language, which couples both schema and data evolution into intuitive operations. Existing database evolution languages focus on usability but do not aim for completeness. However, completeness is an indispensable prerequisite for reasonable database evolution that avoids complex and error-prone workarounds. We argue that relational completeness is the feasible expressiveness for a database evolution language. Building upon an existing language, we introduce CoDEL. We define its semantics using relational algebra, propose a syntax, and show its relational completeness.
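    The abstract's concrete operations are not given, so as an illustration of the core idea of coupling schema and data evolution into one declarative operation, here is a minimal sketch (the operation name, its compilation into SQL, and all identifiers are hypothetical, not CoDEL's actual syntax): a single "decompose" operation that splits a table is compiled into statements covering both the schema change and the data migration.

    ```python
    def compile_decompose(table, key, left_cols, right_cols, left, right):
        """Compile a hypothetical DECOMPOSE evolution operation into SQL.

        Schema evolution (CREATE TABLE ... AS) and data evolution (the embedded
        SELECTs) are emitted together, so the developer never hand-writes a
        migration script for this change.
        """
        stmts = [
            f"CREATE TABLE {left} AS SELECT {', '.join([key] + left_cols)} FROM {table};",
            f"CREATE TABLE {right} AS SELECT {', '.join([key] + right_cols)} FROM {table};",
            f"DROP TABLE {table};",
        ]
        return "\n".join(stmts)

    # Split a hypothetical employee table into person data and payroll data.
    print(compile_decompose("employee", "id", ["name"], ["salary"], "person", "payroll"))
    ```

    A relationally complete language guarantees that every such reshaping expressible in relational algebra has a counterpart operation, which is exactly the completeness property the paper argues for.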

    Robust and simple database evolution

    Software developers adapt to the fast-moving nature of software systems with agile development techniques. However, database developers lack the tools and concepts to keep pace. Whenever the current database schema is evolved, the already existing data needs to be evolved as well. This is usually realized with manually written SQL scripts, which is error-prone and causes significant costs in software projects. A promising solution is declarative database evolution languages, which couple both schema and data evolution into intuitive operations. Existing database evolution languages focus on usability but do not strive for completeness. However, completeness is an indispensable prerequisite to avoid complex and error-prone workarounds. We present CoDEL, which is based on an existing language but is relationally complete. We precisely define its semantics using relational algebra, propose a syntax, and formally validate its relational completeness. Having a complete and comprehensive database evolution language facilitates valuable support throughout the whole evolution of a database. As an instance, we present VACO, a tool supporting developers with variant co-evolution. Given a variant schema derived from a core schema, VACO uses the richer semantics of CoDEL to semi-automatically co-evolve this variant with the core.

    A comprehensive quality assessment framework for scientific events

    Systematic assessment of scientific events has become increasingly important for research communities. A range of metrics (e.g., citations, h-index) have been developed by different research communities to make such assessments effective. However, most of the metrics for assessing the quality of less formal publication venues and events have not yet been deeply investigated. It is also rather challenging to develop respective metrics because each research community has its own formal and informal rules of communication and quality standards. In this article, we develop a comprehensive framework of assessment metrics for evaluating scientific events and the stakeholders involved. The resulting quality metrics are determined with respect to three general categories—events, persons, and bibliometrics. Our assessment methodology is empirically applied to several series of computer science events, such as conferences and workshops, using publicly available data for determining quality metrics. We show that the metrics’ values coincide with the intuitive agreement of the community on its “top conferences”. Our results demonstrate that highly-ranked events share similar profiles, including the provision of outstanding reviews, visiting diverse locations, having reputed people involved, and renowned sponsors.
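    The three-category structure described above can be sketched as a toy composite score (the specific sub-metrics, normalization caps, and weights below are my illustrative assumptions, not the paper's actual formulas):

    ```python
    def event_quality(acceptance_rate, avg_citations, pc_h_index_mean,
                      w_event=0.4, w_biblio=0.35, w_person=0.25):
        """Combine one normalized metric from each category (event, bibliometric,
        person) into a single score in [0, 1]. All weights and caps are
        illustrative placeholders."""
        selectivity = 1.0 - acceptance_rate        # event category: lower rate = more selective
        biblio = min(avg_citations / 50.0, 1.0)    # bibliometric category, capped at 50 cites
        person = min(pc_h_index_mean / 40.0, 1.0)  # person category, capped at h-index 40
        return w_event * selectivity + w_biblio * biblio + w_person * person

    top = event_quality(0.18, 45, 38)  # selective venue, strong citations and PC
    mid = event_quality(0.45, 10, 15)  # less selective venue with weaker signals
    print(round(top, 3), round(mid, 3))  # 0.88 vs. 0.384: the profiles separate cleanly
    ```

    Even this crude combination reproduces the qualitative finding above: venues with a "top conference" profile score consistently higher across all three categories at once.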

    Analysing the evolution of computer science events leveraging a scholarly knowledge graph: a scientometrics study of top-ranked events in the past decade

    The publish-or-perish culture of scholarly communication results in quality and relevance being subordinate to quantity. Scientific events such as conferences play an important role in scholarly communication and knowledge exchange. Researchers in many fields, such as computer science, often need to search for events to publish their research results, establish connections for collaborations with other researchers, and stay up to date with recent work. Researchers need a meta-research understanding of the quality of scientific events in order to publish in high-quality venues. However, there are many diverse and complex criteria to be explored for the evaluation of events. Thus, finding events with quality-related criteria becomes a time-consuming task for researchers and often results in an experience-based subjective evaluation. OpenResearch.org is a crowd-sourcing platform that provides features to explore previous and upcoming computer science events, based on a knowledge graph. In this paper, we devise an ontology representing scientific events metadata. Furthermore, we introduce an analytical study of the evolution of computer science events leveraging the OpenResearch.org knowledge graph. We identify common characteristics of these events, formalize them, and combine them into a group of metrics. These metrics can be used by potential authors to identify high-quality events. On top of the improved ontology, we analyzed the metadata of renowned conferences in various computer science communities, such as VLDB, ISWC, ESWC, WIMS, and SEMANTiCS, in order to inspect their potential as event metrics.
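    One simple evolution metric of the kind described, a trend in a venue's acceptance rate over a decade, could be sketched as follows (the yearly series is hypothetical data, not taken from the study):

    ```python
    def acceptance_trend(years, rates):
        """Least-squares slope of acceptance rate over time.

        A negative slope indicates a venue growing more selective; a positive
        one, a venue growing less selective. Plain ordinary least squares.
        """
        n = len(years)
        mean_x = sum(years) / n
        mean_y = sum(rates) / n
        num = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, rates))
        den = sum((x - mean_x) ** 2 for x in years)
        return num / den

    # Hypothetical series: a conference growing more selective over the decade.
    years = [2010, 2012, 2014, 2016, 2018, 2020]
    rates = [0.32, 0.29, 0.27, 0.24, 0.22, 0.20]
    print(acceptance_trend(years, rates))  # -0.012: about 1.2 points lower per year
    ```

    Aggregating such per-venue slopes over a knowledge graph of events is one way the continuous, projection-friendly analysis described above could be realized.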

    Hadronic light-by-light corrections to the muon g-2: the pion-pole contribution

    The correction to the muon anomalous magnetic moment from the pion-pole contribution to hadronic light-by-light scattering is considered using a description of the pi0 - gamma* - gamma* transition form factor based on the large-Nc and short-distance properties of QCD. The resulting two-loop integrals are treated by first performing the angular integration analytically, using the method of Gegenbauer polynomials, followed by a numerical evaluation of the remaining two-dimensional integration over the moduli of the Euclidean loop momenta. The value obtained, a_{mu}(LbyL;pi0) = +5.8 (1.0) x 10^{-10}, disagrees with other recent calculations. In the case of the vector meson dominance form factor, the result obtained by following the same procedure reads a_{mu}(LbyL;pi0)_{VMD} = +5.6 x 10^{-10}, and differs only by its overall sign from the value obtained by previous authors. Inclusion of the eta and eta-prime poles gives a total value a_{mu}(LbyL;PS) = +8.3 (1.2) x 10^{-10} for the three pseudoscalar states. This result substantially reduces the difference between the experimental value of a_{mu} and its theoretical counterpart in the standard model. Comment: 27 pages, LaTeX, 3 figures. v2: version to be published in Phys. Rev. D, Note added and references updated (don't worry, sign has not changed)